The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
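The patch-based training that most respondents use to cope with oversized samples can be sketched as follows. This is a minimal illustration, not any participant's actual pipeline; the function name, patch size, and stride are arbitrary choices for the example.

```python
import numpy as np

def extract_patches_3d(volume, patch_size, stride):
    """Slide a 3D window over a volume and collect fixed-size patches.

    A common workaround when whole samples (e.g., CT/MRI scans) are
    too large to be processed at once: train on sub-volumes instead.
    """
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    patches = []
    for z in range(0, D - pd + 1, stride):
        for y in range(0, H - ph + 1, stride):
            for x in range(0, W - pw + 1, stride):
                patches.append(volume[z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

# A toy 16^3 "scan" split into non-overlapping 8^3 patches:
# 2 * 2 * 2 = 8 patches in total.
volume = np.random.default_rng(0).random((16, 16, 16))
patches = extract_patches_3d(volume, (8, 8, 8), stride=8)
print(patches.shape)  # (8, 8, 8, 8)
```

A smaller stride would yield overlapping patches, which is common in practice to avoid border artifacts at inference time.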
Diffusion models, which learn to reverse a signal destruction process to generate new data, typically require the signal at each step to have the same dimension. We argue that, considering the spatial redundancy in image signals, there is no need to maintain a high dimensionality in the evolution process, especially in the early generation phase. To this end, we make a theoretical generalization of the forward diffusion process via signal decomposition. Concretely, we manage to decompose an image into multiple orthogonal components and control the attenuation of each component when perturbing the image. That way, as the noise strength increases, we are able to diminish those inconsequential components and thus use a lower-dimensional signal to represent the source, barely losing information. Such a reformulation allows varying the dimension in both the training and inference of diffusion models. Extensive experiments on a range of datasets suggest that our approach substantially reduces the computational cost and achieves on-par or even better synthesis performance compared to baseline methods. We also show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46. Code and models will be made publicly available.
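The idea of decomposing an image into orthogonal components and attenuating them at different rates can be sketched with a simple 2x average-pooling decomposition. This is an illustrative toy, not the paper's decomposition: the per-component decay schedules below are invented for the example, and only the orthogonality of the two components is demonstrated.

```python
import numpy as np

def decompose(image):
    """Split an image into a low-frequency component (2x2 block means,
    upsampled back) and its orthogonal high-frequency residual."""
    H, W = image.shape
    low = image.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    low_up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    residual = image - low_up  # zero-mean within each 2x2 block
    return low_up, residual

def forward_step(image, t, T, sigma=1.0, rng=None):
    """Perturb the image while attenuating the fine-detail component
    faster than the coarse one, so late steps carry little information
    beyond what a lower-resolution signal could represent.
    The decay schedules here are purely illustrative."""
    rng = rng if rng is not None else np.random.default_rng(0)
    low, high = decompose(image)
    a_low = 1.0 - t / T           # slow decay for coarse structure
    a_high = (1.0 - t / T) ** 3   # faster decay for fine detail
    noise = sigma * np.sqrt(t / T) * rng.standard_normal(image.shape)
    return a_low * low + a_high * high + noise

x = np.random.default_rng(1).standard_normal((8, 8))
low, high = decompose(x)
# The two components are orthogonal: their inner product is ~0,
# since the residual sums to zero within every 2x2 block.
print(abs((low * high).sum()) < 1e-9)  # True
```

Because the components are orthogonal, attenuating one does not disturb the information carried by the other, which is what makes a lower-dimensional representation of late-stage signals lossless in the relevant components.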
Few-shot action recognition aims to recognize novel action classes using only a few labeled training samples. In this work, we propose a novel approach that first summarizes each video into a compound prototype consisting of a group of global prototypes and a group of focused prototypes, and then compares video similarity based on the prototypes. Each global prototype is encouraged to summarize a specific aspect of the entire video, for example, the start or evolution of the action. Since no explicit annotation is provided for the global prototypes, we use a group of focused prototypes to focus on certain timestamps in the video. We compare video similarity by matching the compound prototypes between support and query videos: the global prototypes compare videos from the same perspective, for example, whether two actions start similarly. For the focused prototypes, since actions have various temporal variations in videos, we apply bipartite matching to allow the comparison of actions with different temporal positions and shifts. Experiments demonstrate that our proposed method achieves state-of-the-art results on multiple benchmarks.
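The bipartite matching of focused prototypes can be sketched as finding the one-to-one assignment between two prototype sets that maximizes total similarity. The brute-force search and the toy similarity values below are illustrative; with only a handful of prototypes per video, exhaustive search is tractable (the Hungarian algorithm would be used for larger sets).

```python
import numpy as np
from itertools import permutations

def bipartite_match(sim):
    """Return the best score and one-to-one assignment between two
    equal-sized prototype sets, maximizing total similarity."""
    n = sim.shape[0]
    best, best_perm = -np.inf, None
    for perm in permutations(range(n)):
        score = sum(sim[i, j] for i, j in enumerate(perm))
        if score > best:
            best, best_perm = score, perm
    return best, best_perm

# Similarity between 3 focused prototypes of a support video (rows)
# and 3 focused prototypes of a query video (columns).
sim = np.array([[0.9, 0.1, 0.2],
                [0.2, 0.8, 0.3],
                [0.1, 0.4, 0.7]])
score, match = bipartite_match(sim)
print(score, match)  # 2.4 (0, 1, 2)
```

Because the matching is over all permutations, a prototype attending to an action's start in one video can pair with the corresponding prototype in another video even if that moment occurs at a different timestamp.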
Object affordance is an important concept in human-object interaction: it provides information about action possibilities based on human motor capacity and the physical properties of objects, thereby benefiting tasks such as action anticipation and robot imitation learning. However, existing datasets often: 1) mix affordance with object functionality; 2) confuse affordance with goal-related actions; and 3) ignore human motor capacity. This paper proposes an efficient annotation scheme that addresses these issues by combining goal-irrelevant motor actions and grasp types as affordance labels, and introduces the concept of mechanical action to represent the action possibilities between two objects. We provide new annotations by applying this scheme to the EPIC-KITCHENS dataset and test our annotations with tasks such as affordance recognition. We qualitatively verify that models trained with our annotations can distinguish affordance from mechanical actions.
First-person action recognition is a challenging task in video understanding. Due to strong egocentric motion and a limited field of view, many background or noisy frames in a first-person video can distract an action recognition model during its learning process. To encode more discriminative features, the model needs to be able to focus on the parts of the video most relevant to action recognition. Previous works addressed this problem by applying temporal attention but failed to consider the global context of the full video, which is critical for determining the relatively important parts. In this work, we propose a simple yet effective Stacked Temporal Attention Module (STAM) that computes temporal attention based on global knowledge across the clip in order to emphasize the most discriminative features. We achieve this by stacking multiple self-attention layers. Instead of naive stacking, which is experimentally shown to be ineffective, we carefully design the input of each self-attention layer so that both the local and global context of the video are considered when generating the temporal attention. Experiments demonstrate that the proposed STAM can be built on top of most existing backbones and improves performance on various datasets.
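The core building block, a stack of self-attention layers with residual connections over per-frame features, can be sketched as below. This is a generic scaled dot-product attention stack under small random weights, not the paper's carefully designed input scheme; the dimensions are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """One scaled dot-product self-attention layer over the T frame
    features (T x d) of a clip: every frame attends to every frame."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

def stacked_temporal_attention(x, layers):
    """Stack several self-attention layers (with residuals) so each
    frame's weighting reflects the global context of the whole clip,
    not just local cues."""
    for Wq, Wk, Wv in layers:
        x = x + self_attention(x, Wq, Wk, Wv)  # residual connection
    return x

rng = np.random.default_rng(0)
T, d = 8, 16  # 8 frame features of dimension 16 (toy sizes)
x = rng.standard_normal((T, d))
layers = [tuple(rng.standard_normal((d, d)) * 0.1 for _ in range(3))
          for _ in range(2)]  # a 2-layer stack
out = stacked_temporal_attention(x, layers)
print(out.shape)  # (8, 16)
```

Since each layer lets every frame aggregate information from all other frames, stacking layers is what propagates clip-level (global) context into the per-frame attention weights.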
Human gaze is cost-efficient physiological data that reveals underlying human attentional patterns. The selective attention mechanism helps the cognitive system focus on task-relevant visual cues by ignoring the presence of distractors. Thanks to this ability, humans can efficiently learn from a very limited number of training samples. Inspired by this mechanism, we aim to leverage gaze for medical image analysis tasks with small training data. Our proposed framework includes a backbone encoder and a Selective Attention Network (SAN) that simulates the underlying attention. The SAN implicitly encodes suspicious regions relevant to the medical diagnosis task by estimating the actual human gaze. We then design a novel Auxiliary Attention Block (AAB) to allow information from the SAN to be used by the backbone encoder for focusing on selective areas. Specifically, this block uses a modified version of a multi-head attention layer to simulate the human visual search process. Note that the SAN and AAB can be plugged into different backbones, and the framework can be used for multiple medical image analysis tasks when equipped with task-specific heads. Our method is demonstrated to achieve superior performance on both a 3D tumor segmentation and a 2D chest X-ray classification task. We also show that the gaze probability maps estimated by the SAN are consistent with actual gaze fixation maps obtained from board-certified doctors.
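The general idea of steering feature aggregation with an estimated gaze probability map can be sketched as gaze-weighted pooling. This is a strong simplification of the AAB (which modifies a multi-head attention layer), and the function name and `alpha` temperature parameter are invented for the example.

```python
import numpy as np

def gaze_biased_attention(feat, gaze_prob, alpha=1.0):
    """Pool an H x W x C feature map with weights derived from an
    estimated gaze probability map, mimicking selective attention to
    diagnostically suspicious regions. `alpha` sharpens (>1) or
    flattens (<1) the gaze prior's influence."""
    w = np.exp(alpha * np.log(gaze_prob + 1e-8))
    w = w / w.sum()
    return (feat * w[..., None]).sum(axis=(0, 1))

H, W, C = 4, 4, 8
feat = np.random.default_rng(0).standard_normal((H, W, C))
# Sanity check: a uniform gaze map reduces to plain average pooling.
gaze = np.full((H, W), 1.0 / (H * W))
pooled = gaze_biased_attention(feat, gaze)
print(np.allclose(pooled, feat.mean(axis=(0, 1))))  # True
```

A peaked gaze map would instead concentrate the pooled representation on the fixated region, which is the behavior the framework exploits when training data is scarce.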
Attribution methods provide a direction for visually explaining opaque neural networks by identifying and visualizing the input regions/pixels that dominate the network's output. Visually explaining video understanding networks with attribution methods is challenging because of the unique spatiotemporal dependencies in video input and the special 3D convolutional or recurrent structures of video understanding networks. However, most existing attribution methods focus on explaining networks that take a single image as input, and few works are designed for video attribution to handle the diverse structures of video understanding networks. In this paper, we investigate a generic perturbation-based attribution method that is compatible with diverse video understanding networks. In addition, we propose a novel regularization term to enhance the method by constraining the smoothness of its attribution results in both the spatial and temporal dimensions. To evaluate the effectiveness of different video attribution methods without relying on manual judgment, we introduce reliable objective metrics, which are checked by a newly proposed reliability measurement. We verified the effectiveness of our method through subjective and objective evaluations and comparisons with multiple significant attribution methods.
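A spatiotemporal smoothness regularizer of the kind described can be sketched as a total-variation penalty on the attribution mask. This is only one term of a perturbation-based objective (which would also include a preservation or deletion loss on the network's output), and the exact form of the paper's regularizer may differ.

```python
import numpy as np

def tv_smoothness(mask):
    """Total-variation penalty over time and space for a T x H x W
    attribution mask; lower values mean smoother attributions."""
    dt = np.abs(np.diff(mask, axis=0)).sum()  # temporal smoothness
    dh = np.abs(np.diff(mask, axis=1)).sum()  # vertical smoothness
    dw = np.abs(np.diff(mask, axis=2)).sum()  # horizontal smoothness
    return dt + dh + dw

# A constant mask is perfectly smooth; a random mask is penalized.
constant = np.full((4, 4, 4), 0.5)
noisy = np.random.default_rng(0).random((4, 4, 4))
print(tv_smoothness(constant), tv_smoothness(noisy) > 0)  # 0.0 True
```

Penalizing differences along the temporal axis (the `axis=0` term) is what distinguishes video attribution from per-frame image attribution: it discourages the mask from flickering between adjacent frames.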
Active vision is inherently attention-driven: an agent actively selects views to attend to in order to rapidly perform a visual task while improving its internal representation of the observed scene. Inspired by the recent success of attention-based models for single RGB images, we propose to address multi-view, depth-based active object recognition with an attention mechanism by developing an end-to-end recurrent 3D attentional network. The architecture takes advantage of a recurrent neural network (RNN) to store and update the internal representation. Our model, trained on a 3D shape dataset, is able to iteratively attend to the best views of a targeted object of interest for recognizing it. To realize 3D view selection, we derive a 3D spatial transformer network that is differentiable, enabling training with backpropagation and achieving much faster convergence than the reinforcement learning employed by most existing attention-based models. Experiments show that our method, with only depth input, achieves state-of-the-art next-best-view performance in both time efficiency and recognition accuracy.
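Why a differentiable view selector converges faster than a reinforcement-learned one can be illustrated with a soft selection: instead of a hard, non-differentiable argmax over candidate views, blend their features by a softmax over learned scores so gradients flow through the selection. This toy sketch is not the paper's 3D spatial transformer; the one-hot "view features" and scores are invented for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_view_selection(view_features, scores):
    """Differentiable next-best-view selection: a convex combination
    of candidate view features, weighted by a softmax over their
    scores. Unlike a hard argmax (as in RL-based selection), every
    weight has a well-defined gradient w.r.t. the scores."""
    w = softmax(scores)
    return w @ view_features

views = np.eye(3)                    # 3 candidate views, toy features
scores = np.array([0.1, 5.0, 0.2])   # the second view dominates
blended = soft_view_selection(views, scores)
print(blended.argmax())  # 1
```

As the score gap grows, the soft selection approaches a hard one, while remaining trainable by backpropagation throughout.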
Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, \textit{\underline{K}nowledge-\underline{E}nhanced \underline{P}re-trained \underline{L}anguage \underline{M}odels} (KEPLMs) have the potential to overcome the above-mentioned limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.
We investigate response generation for multi-turn dialogue in generative chatbots. Existing generative models based on RNNs (Recurrent Neural Networks) usually employ the last hidden state to summarize a sequence, which leaves them unable to capture the subtle variability observed across different dialogues and unable to distinguish dialogues that are similar in composition. In this paper, we propose a Pseudo-Variational Gated Recurrent Unit (PVGRU) component that requires no posterior knowledge, by introducing a recurrent summarizing variable into the GRU that aggregates the accumulated distribution variations of subsequences. PVGRU can perceive subtle semantic variability through summarizing variables that are optimized by the devised distribution-consistency and reconstruction objectives. In addition, we build a Pseudo-Variational Hierarchical Dialogue (PVHD) model based on PVGRU. Experimental results demonstrate that PVGRU can broadly improve the diversity and relevance of responses on two benchmark datasets.
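The idea of a GRU augmented with a recurrent summarizing variable can be sketched as below. This is a simplified reading of PVGRU: the update rule for the summarizing variable `s` (folding in how much the hidden state moved at each step) is illustrative, and the real model additionally optimizes it with the distribution-consistency and reconstruction objectives.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SummarizingGRUCell:
    """A standard GRU cell extended with a recurrent summarizing
    variable s_t that accumulates subsequence variation (toy sketch
    of the PVGRU idea; weight shapes and init are arbitrary)."""

    def __init__(self, d_in, d_h, seed=0):
        rng = np.random.default_rng(seed)
        k = d_in + d_h
        self.Wz = rng.standard_normal((k, d_h)) * 0.1
        self.Wr = rng.standard_normal((k, d_h)) * 0.1
        self.Wh = rng.standard_normal((k, d_h)) * 0.1
        self.Ws = rng.standard_normal((2 * d_h, d_h)) * 0.1

    def step(self, x, h, s):
        xh = np.concatenate([x, h])
        z = sigmoid(xh @ self.Wz)                      # update gate
        r = sigmoid(xh @ self.Wr)                      # reset gate
        h_tilde = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
        h_new = (1 - z) * h + z * h_tilde
        # Summarizing variable: recurrently folds in the hidden-state
        # change, so dialogues with similar composition but different
        # evolution accumulate different summaries.
        s_new = np.tanh(np.concatenate([s, h_new - h]) @ self.Ws)
        return h_new, s_new

cell = SummarizingGRUCell(d_in=4, d_h=8)
h, s = np.zeros(8), np.zeros(8)
for x in np.random.default_rng(1).standard_normal((5, 4)):
    h, s = cell.step(x, h, s)
print(h.shape, s.shape)  # (8,) (8,)
```

In contrast to using only the final hidden state `h`, the pair `(h, s)` carries both where the sequence ended up and how it got there.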